364 research outputs found

    A multilevel account of hippocampal function in spatial and concept learning: Bridging models of behavior and neural assemblies

    A complete neuroscience requires multilevel theories that address phenomena ranging from higher-level cognitive behaviors to activities within a cell. We propose an extension to the levels-of-mechanism approach in which a computational model of cognition sits between behavior and brain: it explains the higher-level behavior and can be decomposed into lower-level component mechanisms, providing a richer understanding of the system than any level alone. Toward this end, we decomposed a cognitive model into neuron-like units using a neural flocking approach that parallels recurrent hippocampal activity. Neural flocking coordinates units that collectively form higher-level mental constructs. The decomposed model suggested how brain-scale neural populations coordinate to form assemblies encoding concept and spatial representations, and why so many neurons are needed for robust performance at the cognitive level. This multilevel explanation provides a way to understand how cognition and symbol-like representations are supported by coordinated neural populations (assemblies) formed through learning.
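    The population-level claim, that many noisy units coordinating toward a consensus yield a robust higher-level representation, can be sketched in a few lines. This is an illustrative toy, not the paper's model: the target vector, noise level, and flocking step size are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
target = np.array([1.0, -0.5, 2.0])  # hypothetical higher-level representation

def population_estimate(n_units, noise=1.0):
    """Average of n noisy neuron-like units coordinated toward a shared target."""
    units = target + noise * rng.standard_normal((n_units, target.size))
    # one "flocking" step: each unit moves toward the population consensus
    # (this reduces dispersion among units; the population mean is unchanged)
    units += 0.5 * (units.mean(axis=0) - units)
    return units.mean(axis=0)

err_small = np.linalg.norm(population_estimate(10) - target)
err_large = np.linalg.norm(population_estimate(10_000) - target)
```

    With ten units the population average is noticeably off-target; with ten thousand it is close, which is one way to read "why so many neurons are needed for robust performance."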

    Learning as the Unsupervised Alignment of Conceptual Systems

    Concept induction requires the extraction and naming of concepts from noisy perceptual experience. For supervised approaches, as the number of concepts grows, so does the number of required training examples. Philosophers, psychologists, and computer scientists have long recognized that children can learn to label objects without being explicitly taught. In a series of computational experiments, we highlight how information in the environment can be used to build and align conceptual systems. Unlike supervised learning, the learning problem becomes easier the more concepts and systems there are to master. The key insight is that each concept has a unique signature within one conceptual system (e.g., images) that is recapitulated in other systems (e.g., text or audio). As predicted, children's early concepts form readily aligned systems.
    Comment: This is a post-peer-review, pre-copyedit version of an article published in Nature Machine Intelligence. The final authenticated version is available online at: https://doi.org/10.1038/s42256-019-0132-
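    The "unique signature" idea can be sketched as matching two unlabeled embedding systems by the agreement of their within-system similarity matrices. Everything below is an illustrative assumption (tiny synthetic systems, cosine signatures, brute-force permutation search), not the paper's actual method.

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(1)
n_concepts = 5
base = rng.standard_normal((n_concepts, 4))                 # shared concept structure
system_a = base + 0.05 * rng.standard_normal(base.shape)    # e.g. image embeddings
perm_true = rng.permutation(n_concepts)                     # unknown labeling
system_b = base[perm_true] + 0.05 * rng.standard_normal(base.shape)  # e.g. text embeddings

def sim(x):
    """Within-system cosine similarity matrix: each row is a concept's signature."""
    x = x / np.linalg.norm(x, axis=1, keepdims=True)
    return x @ x.T

sa, sb = sim(system_a), sim(system_b)

def alignment_score(p):
    """Agreement between the two similarity structures under candidate mapping p."""
    p = list(p)
    return np.corrcoef(sa[np.ix_(p, p)].ravel(), sb.ravel())[0, 1]

# unsupervised alignment: pick the mapping whose signatures agree best
best = max(permutations(range(n_concepts)), key=alignment_score)
```

    No labels cross between the systems; the alignment is recovered purely from each concept's similarity signature, which is what makes the problem easier as systems grow richer.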

    A neural network account of memory replay and knowledge consolidation

    Replay can consolidate memories through offline neural reactivation related to past experiences. Category knowledge is learned across multiple experiences, and its subsequent generalization is promoted by consolidation and replay during rest and sleep. However, aspects of replay are difficult to determine from neuroimaging studies. We provided insights into category knowledge replay by simulating these processes in a neural network that approximated the roles of the human ventral visual stream and hippocampus. Generative replay, akin to imagining new category instances, facilitated generalization to new experiences. Consolidation-related replay may therefore help to prepare us for the future as much as remember the past. Generative replay was more effective in later network layers functionally similar to the lateral occipital cortex than in layers corresponding to early visual cortex, drawing a distinction between neural replay and its relevance to consolidation. Category replay was most beneficial for newly acquired knowledge, suggesting replay helps us adapt to changes in our environment. Finally, we present a novel mechanism for the observation that the brain selectively consolidates weaker information: a reinforcement learning process in which categories were replayed according to their contribution to network performance. This reinforces the idea of consolidation-related replay as an active rather than a passive process.
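    The final mechanism, replaying categories in proportion to how much they help performance, can be sketched with a toy scheduler. The category names and accuracies are invented, and current error stands in here for the learned contribution-to-performance signal; this is a sketch of the selection principle, not the paper's network.

```python
import numpy as np

rng = np.random.default_rng(2)
# hypothetical per-category accuracy after initial learning
accuracy = {"birds": 0.95, "dogs": 0.80, "chairs": 0.60}

# reinforcement-style weighting: replay a category in proportion to its
# estimated contribution to performance, approximated here by current error,
# so weaker (newer, less consolidated) knowledge is replayed more often
names = list(accuracy)
errors = np.array([1.0 - accuracy[n] for n in names])
p_replay = errors / errors.sum()

schedule = rng.choice(names, size=1000, p=p_replay)
counts = {n: int((schedule == n).sum()) for n in names}
```

    The weakest category dominates the replay schedule, matching the observation that the brain selectively consolidates weaker information.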

    Reassessing hierarchical correspondences between brain and deep networks through direct interface

    Functional correspondences between deep convolutional neural networks (DCNNs) and the mammalian visual system support a hierarchical account in which successive stages of processing contain ever higher-level information. However, these correspondences between brain and model activity involve shared, not task-relevant, variance. We propose a stricter account of correspondence: if a DCNN layer corresponds to a brain region, then replacing model activity with brain activity should successfully drive the DCNN’s object recognition decision. Using this approach on three datasets, we found that all regions along the ventral visual stream best corresponded with later model layers, indicating that all stages of processing contained higher-level information about object category. Time course analyses suggest that long-range recurrent connections transmit object class information from late to early visual areas.
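    The "direct interface" test can be sketched as substituting a layer's activity with externally derived activity and running the remaining layers to a decision. The toy weights and the simulated "brain-derived" activity below are assumptions for illustration; the real work decodes activity from fMRI/MEG data.

```python
import numpy as np

rng = np.random.default_rng(3)
W1 = rng.standard_normal((8, 4))   # stand-in "early layer" weights
W2 = rng.standard_normal((4, 3))   # stand-in "late layer" readout over 3 classes

def decide_from_layer(h):
    """Run the remainder of the network from the interfaced layer to a decision."""
    return int(np.argmax(np.maximum(h, 0) @ W2))

x = rng.standard_normal(8)
h_model = x @ W1                       # the model's own activity at that layer
decision_model = decide_from_layer(h_model)

# stricter correspondence test: drive the decision from activity obtained
# outside the model (simulated here as a noisy read-out of model activity)
h_brain = h_model + 0.01 * rng.standard_normal(4)
decision_brain = decide_from_layer(h_brain)
```

    If a brain region truly corresponds to that layer, the substituted activity should still support the recognition decision; comparing `decision_brain` against ground truth across regions and layers is what ranks the correspondences.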

    Sequential consumer choice as multi-cued retrieval

    Whether adding songs to a playlist or groceries during an online shop, how do we decide what to choose next? We develop a model that predicts such open-ended, sequential choices using a process of cued retrieval from long-term memory. Using the past choice to cue subsequent retrievals, this model predicts the sequential purchases and response times of nearly 5 million grocery purchases made by more than 100,000 online shoppers. Products can be associated in different ways, such as by their episodic association or semantic overlap, and we find that consumers query multiple forms of associative knowledge when retrieving options. Attending to certain knowledge sources, as estimated by our model, predicts important retrieval errors, such as the propensity to forget or add unwanted products. Our results demonstrate how basic memory retrieval mechanisms shape choices in real-world, goal-directed tasks.
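    The core retrieval loop, each choice cueing the next from associative memory, can be sketched as follows. The products, association strengths, and deterministic argmax read-out are illustrative assumptions (the actual model combines multiple knowledge sources probabilistically and also predicts response times).

```python
import numpy as np

products = ["pasta", "tomato sauce", "cereal", "milk"]
# hypothetical associative strengths (e.g. blending episodic co-purchase
# and semantic overlap); row = cue, column = candidate next choice
assoc = np.array([
    [0.0, 0.9, 0.1, 0.1],   # pasta strongly cues tomato sauce
    [0.8, 0.0, 0.1, 0.2],
    [0.1, 0.1, 0.0, 0.9],   # cereal strongly cues milk
    [0.2, 0.2, 0.8, 0.0],
])

def retrieve(cue, temperature=0.2):
    """Softmax (Luce-choice) retrieval cued by the previous purchase."""
    s = assoc[products.index(cue)] / temperature
    p = np.exp(s - s.max())
    p /= p.sum()
    # take the most strongly cued option; sampling from p would model noise
    return products[int(np.argmax(p))]

basket = ["pasta"]
for _ in range(2):
    basket.append(retrieve(basket[-1]))
```

    Each retrieved item becomes the cue for the next retrieval, so the basket unfolds as a chain through associative memory rather than as independent choices.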

    Enriching ImageNet with Human Similarity Judgments and Psychological Embeddings

    Advances in object recognition flourished in part because of the availability of high-quality datasets and associated benchmarks. However, these benchmarks, such as ILSVRC, are relatively task-specific, focusing predominantly on predicting class labels. We introduce a publicly available dataset that embodies the task-general capabilities of human perception and reasoning. The Human Similarity Judgments extension to ImageNet (ImageNet-HSJ) is composed of human similarity judgments that supplement the ILSVRC validation set. The new dataset supports a range of task and performance metrics, including the evaluation of unsupervised learning algorithms. We demonstrate two methods of assessment: using the similarity judgments directly and using a psychological embedding trained on the similarity judgments. This embedding space contains an order of magnitude more points (i.e., images) than previous efforts based on human judgments. Scaling to the full 50,000-image set was made possible through a selective sampling process that used variational Bayesian inference and model ensembles to sample aspects of the embedding space that were most uncertain. This methodological innovation not only enables scaling, but should also improve the quality of solutions by focusing sampling where it is needed. To demonstrate the utility of ImageNet-HSJ, we used the similarity ratings and the embedding space to evaluate how well several popular models conform to human similarity judgments. One finding is that more complex models that perform better on task-specific benchmarks do not better conform to human semantic judgments. In addition to the human similarity judgments, pre-trained psychological embeddings and code for inferring variational embeddings are made publicly available. Collectively, ImageNet-HSJ assets support the appraisal of internal representations and the development of more human-like models.
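    The selective sampling step can be sketched as uncertainty sampling over item pairs: an ensemble of candidate embeddings stands in for variational posterior samples, and the pairs the ensemble disagrees on most are queued for human judgment. The item counts, dimensionality, and per-item spreads below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
n_items = 50
base = rng.standard_normal((n_items, 2))
# per-item posterior spread: the first 25 items are well constrained by
# existing judgments, the last 25 are still uncertain (assumed split)
scale = np.where(np.arange(n_items) < 25, 0.01, 1.0)[:, None]
# ensemble members stand in for samples from a variational posterior
ensemble = [base + scale * rng.standard_normal((n_items, 2)) for _ in range(8)]

pairs = [(i, j) for i in range(n_items) for j in range(i + 1, n_items)]

def pair_uncertainty(i, j):
    """Disagreement across the ensemble about how far apart two items are."""
    d = [np.linalg.norm(e[i] - e[j]) for e in ensemble]
    return np.var(d)

# query humans about the pairs the ensemble disagrees on most
uncertain = sorted(pairs, key=lambda p: pair_uncertainty(*p), reverse=True)[:10]
```

    Focusing judgments on high-disagreement pairs is what lets the collection scale to 50,000 images: well-constrained parts of the embedding stop consuming annotation budget.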

    Nudging investors big and small toward better decisions

    Investors significantly reduce their future returns by selecting mutual funds with higher fees, lured by higher past returns that do not predict future performance. This suboptimal behavior, which can roughly halve an investor’s retirement savings, is driven by two psychological factors. One factor is difficulty comprehending rate information, which is critical given that mutual fund fees and returns are typically communicated in percentages. A second factor is devaluing small differences in returns or fees (i.e., a peanuts effect). These two factors interact such that large investors benefit when fees are stated in currency (as opposed to percentages), whereas small investors benefit from returns stated in currency. These striking results suggest behavioral interventions that are tailored specifically for small and large investors.
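    The "roughly halve an investor's retirement savings" claim comes from compounding a small annual fee difference over a long horizon. The numbers below (7% gross return, 0.5% vs. 2.5% fees, 40 years) are illustrative assumptions, not figures from the paper:

```python
# compounding a fee difference over a full working life
years = 40
gross_return = 1.07                 # assumed 7% annual gross return
low_fee, high_fee = 0.005, 0.025    # 0.5% vs 2.5% annual fee

# net growth factor each year is (gross multiplier minus the fee rate)
balance_low = (gross_return - low_fee) ** years    # growth of $1 in the cheap fund
balance_high = (gross_return - high_fee) ** years  # growth of $1 in the expensive fund
ratio = balance_high / balance_low                 # ≈ 0.47: roughly half
```

    A two-point fee gap looks like "peanuts" in any single year, which is exactly why percentage framing hides its four-decade cost.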